Information-Theoretic Lower Bounds on Bayes Risk in Decentralized Estimation
Authors
Abstract
Related works
On Bayes Risk Lower Bounds
This paper provides a general technique for lower bounding the Bayes risk of statistical estimation, applicable to arbitrary loss functions and arbitrary prior distributions. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk, but also characterizes the fundamental limit of any estimator given the prior knowledge. Our bounds are based on the notion of f-inform...
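As a brief worked note on the claim above (a standard decision-theoretic fact, not taken from the truncated abstract): for any estimator $\hat{\theta}$, loss $L$, and prior $\pi$,

\[
\sup_{\theta} \mathbb{E}_{\theta} L(\theta, \hat{\theta})
\;\ge\; \int \mathbb{E}_{\theta} L(\theta, \hat{\theta}) \, \pi(d\theta)
\;\ge\; \inf_{\tilde{\theta}} \int \mathbb{E}_{\theta} L(\theta, \tilde{\theta}) \, \pi(d\theta),
\]

so the worst-case (minimax) risk of any estimator is at least the Bayes risk under $\pi$, and any lower bound on the Bayes risk transfers directly to the minimax risk.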
Lower Bounds for Bayes Error Estimation
We give a short proof of the following result. Let $(X, Y)$ be any distribution on $\mathbb{N} \times \{0, 1\}$, and let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be an i.i.d. sample drawn from this distribution. In discrimination, the Bayes error $L^* = \inf_g P\{g(X) \neq Y\}$ is of crucial importance. Here we show that without further conditions on the distribution of $(X, Y)$, no rate-of-convergence results can be obtained. Let ...
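A minimal sketch, assuming the joint distribution is fully known, of how the Bayes error defined above can be computed exactly; the toy pmf and variable names are hypothetical and not from the cited paper.

# Bayes error L* = inf_g P{g(X) != Y} for a toy distribution on {0, 1, 2} x {0, 1}.
# The joint pmf below is made-up example data.
joint = {
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.05, (1, 1): 0.25,
    (2, 0): 0.10, (2, 1): 0.20,
}

# The optimal rule predicts the more probable label at each x, so the error it
# cannot avoid at x is min(P(X=x, Y=0), P(X=x, Y=1)); summing over x gives L*.
xs = {x for (x, _) in joint}
bayes_error = sum(min(joint.get((x, 0), 0.0), joint.get((x, 1), 0.0)) for x in xs)
print(bayes_error)  # 0.25 for this toy pmf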
Complexity Theoretic Lower Bounds on Cryptographic Functions
Acknowledgments. First of all, it is a pleasure to thank my supervisor Hans Ulrich Simon for the great support and the nice atmosphere in his research group. He has been a reliable source of encouragement and advice through all my years at the Ruhr-Universität Bochum. Thanks also to all the nice people I met at the "Lehrstuhl für Mathematik und Informatik" and the "Lehrstuhl für Informationss...
Information-theoretic lower bounds for distributed statistical estimation with communication constraints
We establish lower bounds on minimax risks for distributed statistical estimation under a communication budget. Such lower bounds reveal the minimum amount of communication required by any procedure to achieve the centralized minimax-optimal rates for statistical estimation. We study two classes of protocols: one in which machines send messages independently, and a second allowing for interacti...
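A minimal simulation sketch of an independent-message protocol of the kind described above, assuming a toy Gaussian-mean problem; the machine count, sample size, bit budget, and quantization range are hypothetical choices, and this is not the specific scheme analyzed in the cited paper.

import random

def machine_message(theta, n, bits, lo=-2.0, hi=2.0):
    # Each machine computes a local sample mean of n draws from N(theta, 1)
    # and sends it uniformly quantized to `bits` bits on the interval [lo, hi].
    local_mean = sum(random.gauss(theta, 1.0) for _ in range(n)) / n
    levels = 2 ** bits
    clipped = min(max(local_mean, lo), hi)
    index = round((clipped - lo) / (hi - lo) * (levels - 1))  # the `bits`-bit message
    return lo + index * (hi - lo) / (levels - 1)              # its dequantized value

def fusion_estimate(theta, machines=20, n=50, bits=4):
    # The fusion center averages the machines' independently formed messages.
    return sum(machine_message(theta, n, bits) for _ in range(machines)) / machines

print(fusion_estimate(theta=0.7))  # an estimate of theta under a machines*bits communication budget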
Information-theoretic lower bounds for convex optimization with erroneous oracles
We consider the problem of optimizing convex and concave functions with access to an erroneous zeroth-order oracle. In particular, for a given function $x \mapsto f(x)$ we consider optimization when one is given access to absolute-error oracles that return values in $[f(x) - \epsilon, f(x) + \epsilon]$, or relative-error oracles that return values in $[(1 - \epsilon) f(x), (1 + \epsilon) f(x)]$, for some $\epsilon > 0$. We show stark information theoret...
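A minimal sketch of the two oracle models described above, assuming a simple one-dimensional convex function; the function, epsilon value, and helper names are hypothetical.

import random

def absolute_error_oracle(f, eps):
    # Returns a zeroth-order oracle whose value lies in [f(x) - eps, f(x) + eps].
    return lambda x: f(x) + random.uniform(-eps, eps)

def relative_error_oracle(f, eps):
    # Returns a zeroth-order oracle whose value lies in [(1 - eps) * f(x), (1 + eps) * f(x)].
    return lambda x: f(x) * random.uniform(1.0 - eps, 1.0 + eps)

f = lambda x: (x - 3.0) ** 2 + 1.0            # a convex function to be optimized
noisy_f = absolute_error_oracle(f, eps=0.1)   # erroneous zeroth-order access to f
print(noisy_f(2.0))                           # some value in [1.9, 2.1]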
Journal
Journal title: IEEE Transactions on Information Theory
Year: 2017
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/tit.2016.2646342